
AI Collaboration


Human-AI Co-Embodied Intelligence for Scientific Experimentation and Manufacturing

Lin, Xinyi, Zhang, Yuyang, Gan, Yuanhang, Chen, Juntao, Shen, Hao, He, Yichun, Li, Lijun, Yuan, Ze, Wang, Shuang, Wang, Chaohao, Zhang, Rui, Li, Na, Liu, Jia

arXiv.org Artificial Intelligence

Scientific experimentation and manufacturing rely on complex, multi-step procedures that demand continuous human expertise for precise execution and decision-making. Despite advances in machine learning and automation, conventional models remain confined to virtual domains, while real-world experimentation and manufacturing still rely on human supervision and expertise. This gap between machine intelligence and physical execution limits reproducibility, scalability, and accessibility across scientific and manufacturing workflows. Here, we introduce human-AI co-embodied intelligence, a new form of physical AI that unites human users, agentic AI, and wearable hardware into an integrated system for real-world experimentation and intelligent manufacturing. In this paradigm, humans provide precise execution and control, while agentic AI contributes memory, contextual reasoning, adaptive planning, and real-time feedback. The wearable interface continuously captures the experimental and manufacturing processes and facilitates seamless communication between humans and AI for corrective guidance and interpretable collaboration. As a demonstration, we present the Agentic-Physical Experimentation (APEX) system, coupling agentic reasoning with physical execution through mixed reality. APEX observes and interprets human actions, aligns them with standard operating procedures, provides 3D visual guidance, and analyzes every step. Implemented in a cleanroom for flexible-electronics fabrication, the APEX system achieves context-aware reasoning with accuracy exceeding that of general multimodal large language models, corrects errors in real time, and transfers expertise to beginners. These results establish a new class of agentic-physical-human intelligence that extends agentic reasoning beyond computation into the physical domain, transforming scientific research and manufacturing into autonomous, traceable, interpretable, and scalable processes.


Development of Mental Models in Human-AI Collaboration: A Conceptual Framework

Holstein, Joshua, Satzger, Gerhard

arXiv.org Artificial Intelligence

Artificial intelligence has become integral to organizational decision-making, and while research has explored many facets of this human-AI collaboration, the focus has mainly been on designing the AI agent(s) and the way the collaboration is set up -- generally assuming a human decision-maker to be "fixed". However, it has largely been neglected that decision-makers' mental models evolve through their continuous interaction with AI systems. This paper addresses this gap by conceptualizing how the design of human-AI collaboration influences the development of three complementary and interdependent mental models necessary for this collaboration. We develop an integrated socio-technical framework that identifies the mechanisms driving mental model evolution: data contextualization, reasoning transparency, and performance feedback.


Human-AI Collaboration or Academic Misconduct? Measuring AI Use in Student Writing Through Stylometric Evidence

Oliveira, Eduardo Araujo, Mohoni, Madhavi, López-Pernas, Sonsoles, Saqr, Mohammed

arXiv.org Artificial Intelligence

Human-Artificial Intelligence (HAI) collaboration in writing offers opportunities to enhance efficiency and boost student confidence; however, it also carries risks, such as reduced creativity, over-reliance on AI-generated content, and threats to academic integrity (Kim & Lee, 2023). While the ethical use of AI in education is widely acknowledged as a way to enhance student learning (Cotton et al., 2023; Foltynek et al., 2023), the rise of Unauthorised Content Generation (UCG) presents a significant challenge to academic integrity. Measuring the extent and nature of HAI collaboration in academic contexts remains a critical challenge for educators, particularly as generative AI (genAI) tools become increasingly available and integrated into educational settings (Atchley et al., 2024; E. Oliveira et al., 2023). Distinguishing AI-generated text from human-authored content is necessary for understanding student learning behaviours, supporting skill development, and maintaining academic integrity. Analysing student writing patterns can help educators evaluate how students engage with AI tools, track their writing skill progression, and identify areas where additional support is needed (Pan et al., 2025). Existing detection tools for AI-assisted misconduct often lack reliability, explainability, and resilience to circumvention strategies such as paraphrasing (Cotton et al., 2023). These challenges highlight the need for innovative, transparent, and robust approaches to address the unacknowledged use of genAI in HAI collaboration within academic writing (Kasneci et al., 2023).


Text Production and Comprehension by Human and Artificial Intelligence: Interdisciplinary Workshop Report

Speltz, Emily Dux

arXiv.org Artificial Intelligence

This report synthesizes the outcomes of a recent interdisciplinary workshop that brought together leading experts in cognitive psychology, language learning, and artificial intelligence (AI)-based natural language processing (NLP). The workshop, funded by the National Science Foundation, aimed to address a critical knowledge gap in our understanding of the relationship between AI language models and human cognitive processes in text comprehension and composition. Through collaborative dialogue across cognitive, linguistic, and technological perspectives, workshop participants examined the underlying processes involved when humans produce and comprehend text, and how AI can both inform our understanding of these processes and augment human capabilities. The workshop revealed emerging patterns in the relationship between large language models (LLMs) and human cognition, with highlights on both the capabilities of LLMs and their limitations in fully replicating human-like language understanding and generation. Key findings include the potential of LLMs to offer insights into human language processing, the increasing alignment between LLM behavior and human language processing when models are fine-tuned with human feedback, and the opportunities and challenges presented by human-AI collaboration in language tasks. By synthesizing these findings, this report aims to guide future research, development, and implementation of LLMs in cognitive psychology, linguistics, and education. It emphasizes the importance of ethical considerations and responsible use of AI technologies while striving to enhance human capabilities in text comprehension and production through effective human-AI collaboration.


Human aversion? Do AI Agents Judge Identity More Harshly Than Performance

Feng, Yuanjun, Chodhary, Vivek, Shrestha, Yash Raj

arXiv.org Artificial Intelligence

This study examines the understudied role of algorithmic evaluation of human judgment in hybrid decision-making systems, a critical gap in management research. While extant literature focuses on human reluctance to follow algorithmic advice, we reverse the perspective by investigating how AI agents based on large language models (LLMs) assess and integrate human input. Our work addresses a pressing managerial constraint: firms barred from deploying LLMs directly due to privacy concerns can still leverage them as mediating tools (for instance, anonymized outputs or decision pipelines) to guide high-stakes choices like pricing or discounts without exposing proprietary data. Through a controlled prediction task, we analyze how an LLM-based AI agent weights human versus algorithmic predictions. We find that the AI system systematically discounts human advice, penalizing human errors more severely than algorithmic errors--a bias exacerbated when the agent's identity (human vs AI) is disclosed and the human is positioned second. These results reveal a disconnect between AI-generated trust metrics and the actual influence of human judgment, challenging assumptions about equitable human-AI collaboration. Our findings offer three key contributions. First, we identify a reverse algorithm aversion phenomenon, where AI agents undervalue human input despite comparable error rates. Second, we demonstrate how disclosure and positional bias interact to amplify this effect, with implications for system design. Third, we provide a framework for indirect LLM deployment that balances predictive power with data privacy. For practitioners, this research emphasizes the need to audit AI weighting mechanisms, calibrate trust dynamics, and strategically design decision sequences in human-AI systems.


Chatbot Teamwork Makes the AI Dream Work

WIRED

Turning to a friend or coworker can make tricky problems easier to tackle. Now it looks like having AI chatbots team up with each other can make them more effective. I've been playing this week with AutoGen, an open source software framework for AI agent collaboration developed by researchers at Microsoft and academics at Pennsylvania State University, the University of Washington, and Xidian University in China. The software taps OpenAI's large language model GPT-4 to let you create multiple AI agents with different personas, roles, and objectives that can be prompted to solve specific problems. To put the idea of AI collaboration to the test, I had two AI agents work together on a plan for how to write about AI collaboration.
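The experiment described above rests on a simple pattern: two agents take turns replying to each other's messages until the task is done. A deliberately minimal sketch of that round-robin loop is below; it uses no LLM, and the `Agent` class, agent names, and canned reply functions are hypothetical stand-ins for illustration, not AutoGen's actual API.

```python
# Minimal sketch of the two-agent conversation loop that frameworks
# like AutoGen automate. In a real system, `respond` would call an LLM
# with the agent's persona as a system prompt; here it is a canned function.

class Agent:
    def __init__(self, name, respond):
        self.name = name
        self.respond = respond   # callable: incoming message -> reply
        self.history = []        # transcript this agent has seen

    def receive(self, message):
        self.history.append(message)
        return self.respond(message)

def run_chat(a, b, opening, max_turns=4):
    """Alternate messages between two agents; return the transcript."""
    transcript = [(a.name, opening)]
    message, speaker, listener = opening, a, b
    for _ in range(max_turns):
        message = listener.receive(message)
        transcript.append((listener.name, message))
        speaker, listener = listener, speaker
    return transcript

planner = Agent("planner", lambda m: f"Plan for: {m[:40]}")
writer = Agent("writer", lambda m: f"Draft based on: {m[:40]}")

log = run_chat(planner, writer, "Write about AI collaboration.", max_turns=2)
for name, text in log:
    print(f"{name}: {text}")
```

Giving each agent a distinct persona and objective, as AutoGen does, amounts to parameterizing `respond` differently per agent; the orchestration loop itself stays this simple.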


Augmenting the Author: Exploring the Potential of AI Collaboration in Academic Writing

Tu, Joseph, Hadan, Hilda, Wang, Derrick M., Sgandurra, Sabrina A, Mogavi, Reza Hadi, Nacke, Lennart E.

arXiv.org Artificial Intelligence

This workshop paper presents a critical examination of the integration of Generative AI (Gen AI) into the academic writing process, focusing on the use of AI as a collaborative tool. It contrasts the performance and interaction of two AI models, Gemini and ChatGPT, through a collaborative inquiry approach where researchers engage in facilitated sessions to design prompts that elicit specific AI responses for crafting research outlines. This case study highlights the importance of prompt design, output analysis, and recognizing the AI's limitations to ensure responsible and effective AI integration in scholarly work. Preliminary findings suggest that prompt variation significantly affects output quality and reveals distinct capabilities and constraints of each model. The paper contributes to the field of Human-Computer Interaction by exploring effective prompt strategies and providing a comparative analysis of Gen AI models, ultimately aiming to enhance AI-assisted academic writing and prompt a deeper dialogue within the HCI community.


Will nationalism end global open-source AI collaboration?

#artificialintelligence

When Ben Wu, an engineer in China, wanted to install Facebook's open-source AI framework PyTorch in 2017, he visited its online community on GitHub and asked for some pointers. Soumith Chintala, a Facebook AI research engineer based in New York, showed him how he could download it quickly. PyTorch has become a foundational component of AI technology, thanks in large part to knowledge-sharing exchanges like the one between Wu and Chintala that happen every day. And although it's become increasingly corporatized, the borderless, open-source software movement has risen above geopolitical tensions between China and the U.S., which have centered on concerns over China's use of AI to carry out repressive surveillance, its plans to transfer civilian tech for military applications, and Chinese government espionage and intellectual property theft. "I'm definitely surprised at how much [of the] general global considerations you would have from a business angle don't really come in when you're talking about open-source collaboration, especially with AI," Chintala told Protocol in September when Facebook parent company Meta handed over PyTorch to the nonprofit open-source software consortium Linux Foundation.


Viettel Enters AI Collaboration with NVIDIA

#artificialintelligence

Viettel Group and NVIDIA signed a Memorandum of Understanding (MoU) to establish a strategic partnership on using Artificial Intelligence (AI) to advance Viettel and Vietnam's technology research and solutions. Viettel is the first Vietnamese company and one of five in Asia to officially establish a strategic partnership with NVIDIA involving AI initiatives. NVIDIA is a global leader in AI hardware and software from edge to cloud computing. The company's technologies are used in 70% of the world's top 500 fastest supercomputers. Viettel will join NVIDIA's Partner Network, a global ecosystem of leading companies across industries, and expects to benefit from NVIDIA's expertise in opportunities for machine learning (ML) and AI research, ML/AI industry collaborations, and other strategic engagements.


Top 5 AI Collaborations Between Indian Govt And Tech Giants In 2019

#artificialintelligence

Prime Minister Narendra Modi's second term saw a slew of initiatives started in order to adopt cutting-edge technologies across the economy. The Interim Budget 2019 discussed the national program for the development of emerging technologies like robotics, IoT and AI, among others. However, the Indian think tank NITI Aayog has clearly been at the forefront of looking at the adoption of new-age technologies for improving government services. In this article, we list the top 5 AI collaborations between the Government of India and the tech giants in 2019. After signing a statement of intent (SoI) last year, this year in March, NITI Aayog joined hands with ABB and organised a workshop to help MSMEs understand how the Indian economy can be digitised with the help of emerging technologies like AI, big data and digital connectivity.